It is possible to extrapolate beyond the interval of the available data once a model has been constructed. The reason that a linear model has this capability is that a linear model can discover the trend relationship between variables.
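As a minimal sketch of this point (assuming scikit-learn and a synthetic linear data set, neither of which is prescribed here), a linear model fitted on one interval produces trend-consistent predictions well outside it:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)

    # Training data are observed only on the interval [0, 10]
    X_train = rng.uniform(0, 10, size=(200, 1))
    y_train = 3.0 * X_train[:, 0] + 2.0 + rng.normal(scale=0.5, size=200)

    model = LinearRegression().fit(X_train, y_train)

    # The fitted trend (slope ~3, intercept ~2) carries predictions
    # beyond the observed interval
    print(model.predict(np.array([[15.0], [25.0]])))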
However, linear models have a common limitation: they may fail to fit a complicated data set. This may result in misleading interpretations if the nonlinearity in the data has been missed during the modelling process.
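The following sketch (again assuming scikit-learn and a synthetic, nonlinear data set) illustrates how a purely linear fit can miss nonlinear structure that a nonlinear learner such as a small neural network captures:

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.neural_network import MLPRegressor
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(0)

    # A clearly nonlinear relationship with a small amount of noise
    X = rng.uniform(-3, 3, size=(500, 1))
    y = np.sin(2 * X[:, 0]) + 0.1 * rng.normal(size=500)

    # Linear model: captures only a global trend and misses the curvature
    linear = LinearRegression().fit(X, y)

    # Small neural network: able to approximate the nonlinear relationship
    mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                       random_state=0).fit(X, y)

    print("linear R^2:", r2_score(y, linear.predict(X)))
    print("MLP R^2:   ", r2_score(y, mlp.predict(X)))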
Nonlinear modelling algorithms, such as neural network learning and later deep learning algorithms, have drawn great attention for modelling biological/medical data for knowledge discovery and pattern analysis since the 1980s. Their greatest contribution is the significant improvement in the accuracy of such models. Deep learning, as a promising machine learning strategy, has drawn even greater attention for biological/medical data analysis and pattern discovery. The reason that deep learning has become popular in biological/medical data analysis is its unique property.
When a machine learning model is constructed based on some non-numerical variables, such as the variables of image data or spectra data, a conversion process from non-numerical variables to numerical variables has to be followed; this is called feature extraction. A feature extraction process can generate hundreds or thousands of feature variables from a single image.
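To give a sense of the scale of this step, the following sketch (a hand-crafted, illustrative feature set built with NumPy; the patch size, the chosen statistics, and the helper name are assumptions of the example, not a prescribed method) turns one grayscale image into several hundred numerical feature variables:

    import numpy as np

    def extract_patch_features(image, patch=8):
        """Convert one grayscale image into a flat numerical feature vector.

        For every non-overlapping patch, record the mean intensity, the
        standard deviation, and the mean gradient magnitude -- a deliberately
        simple hand-crafted feature set used only to illustrate the idea.
        """
        grad_r, grad_c = np.gradient(image.astype(float))
        grad = np.hypot(grad_r, grad_c)
        feats = []
        h, w = image.shape
        for i in range(0, h - patch + 1, patch):
            for j in range(0, w - patch + 1, patch):
                block = image[i:i + patch, j:j + patch]
                gblock = grad[i:i + patch, j:j + patch]
                feats.extend([block.mean(), block.std(), gblock.mean()])
        return np.array(feats)

    # A synthetic 128 x 128 "image": 16 x 16 = 256 patches x 3 statistics
    # per patch = 768 feature variables
    image = np.random.default_rng(0).random((128, 128))
    print(extract_patch_features(image).shape)   # (768,)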
Some of these feature variables may have high discrimination power, but some may not. Some may be highly correlated with each other, making certain feature variables redundant, while others may not be. Therefore, after feature variables have been extracted, a further process has to be followed to filter out non-contributing feature variables. This is called a feature selection process. Although many machine learning algorithms have been developed for both feature extraction and feature selection, these two processes are still non-trivial.
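As a concrete, deliberately simple illustration of feature selection, the sketch below first keeps the feature variables with the strongest univariate discrimination power and then prunes highly correlated, redundant ones; the synthetic data, the ANOVA F-test scoring, and the 0.9 correlation cut-off are assumptions of the example rather than a recommended recipe:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, f_classif

    # Synthetic data standing in for extracted features: 500 samples and
    # 768 feature variables, only a few of which carry class information
    X, y = make_classification(n_samples=500, n_features=768,
                               n_informative=10, n_redundant=50,
                               random_state=0)

    # Step 1 (discrimination power): keep the 50 feature variables with the
    # strongest univariate association with the class label (ANOVA F-test)
    selector = SelectKBest(score_func=f_classif, k=50)
    X_best = selector.fit_transform(X, y)

    # Step 2 (redundancy): greedily drop features that are highly correlated
    # (|r| > 0.9, an arbitrary cut-off) with an already-kept feature
    corr = np.abs(np.corrcoef(X_best, rowvar=False))
    keep = []
    for j in range(X_best.shape[1]):
        if all(corr[j, k] <= 0.9 for k in keep):
            keep.append(j)
    X_selected = X_best[:, keep]

    print(X.shape, "->", X_best.shape, "->", X_selected.shape)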
Selecting the most representative feature variables is another tedious process in addition to feature extraction, and it is prone to error. Importantly, there is no unique method for this process. When a research objective changes, or even when there are some changes in the data, a new process has to start again. A deep learning model can incorporate a feature